    Pretty Private Group Management

    Group management is a fundamental building block of today's Internet applications. Mailing lists, chat systems, and collaborative document editing, but also online social networks such as Facebook and Twitter, rely on group management systems. In many cases, group security is required, in the sense that access to data is restricted to group members only. Some applications also require privacy, keeping group members anonymous and unlinkable. Group management systems routinely rely on a central authority that manages and controls the infrastructure and data of the system. Personal user data related to groups then becomes de facto accessible to this central authority. In this paper, we propose a completely distributed approach to group management based on distributed hash tables. As there is no enrollment with a central authority, the created groups can be leveraged by various applications. Following this paradigm, we describe a protocol for such a system. We consider the security and privacy issues inherently introduced by removing the central authority and provide a formal validation of the security properties of the system using AVISPA. We demonstrate the feasibility of this protocol by implementing a prototype running on top of Vuze's DHT.
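
    To make the DHT-based idea concrete, here is a minimal, non-authoritative sketch of how group records could be stored and retrieved in a DHT without any central authority. The ToyDHT class, the key derivation, and the plain-text member list are all illustrative assumptions; the paper's actual protocol (key management, anonymity, and the AVISPA-validated properties) is not reproduced here.

        import hashlib

        class ToyDHT:
            """In-memory stand-in for a real DHT (such as Vuze's): a key -> value store."""
            def __init__(self):
                self._store = {}

            def put(self, key, value):
                self._store[key] = value

            def get(self, key):
                return self._store.get(key)

        def group_key(group_name):
            # Derive the DHT key from the group name; no central registry is involved.
            return hashlib.sha1(group_name.encode()).digest()

        def create_group(dht, name, members):
            # A real protocol would encrypt and sign this record so that only members
            # can read or update it; here it is stored in the clear for illustration.
            dht.put(group_key(name), ",".join(members).encode())

        def read_group(dht, name):
            raw = dht.get(group_key(name))
            return raw.decode().split(",") if raw else []

        dht = ToyDHT()
        create_group(dht, "reading-club", ["alice", "bob"])
        print(read_group(dht, "reading-club"))  # ['alice', 'bob']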

    Qu’est ce qu’un algorithme en boßte noire ? Tractatus des décisions algorithmiques

    Companies or institutions (Divinities) disintermediate their relations with users via decision-making algorithms (Pythias): users (mortals) are offered arbitrary decisions (oracles) during their interactions. How to apprehend these black box algorithms, from the point of view of a user or regulator?

    SONDe: ContrÎle de densité auto-organisante de fonctions réseaux pair à pair

    http://algotel2006.lip6.fr/
    Long dominated by file-sharing systems, peer-to-peer systems are now opening up to a wide range of applications such as email, DNS, telephony, and distributed caches. The proper operation of these applications depends on basic functions whose access can become a bottleneck if they are not sufficiently replicated across the system. In this article we present SONDe, an algorithm that replicates these functions automatically and adaptively in a very large-scale network. The algorithm also bounds the number of network hops between a peer and a function, making the expected latencies predictable and tunable. This is achieved through a simple decision rule in which each peer examines its own neighborhood.
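
    As a rough, non-authoritative illustration of such a neighborhood-based decision rule (SONDe's actual rule and parameters are not reproduced here), the sketch below assumes each node knows its h-hop neighborhood and becomes a provider of the function only when no provider is already reachable within h hops.

        from collections import deque

        def within_h_hops(graph, node, h):
            """Return the set of nodes at distance <= h from node (BFS over an adjacency dict)."""
            seen, frontier = {node}, deque([(node, 0)])
            while frontier:
                current, dist = frontier.popleft()
                if dist == h:
                    continue
                for neighbor in graph[current]:
                    if neighbor not in seen:
                        seen.add(neighbor)
                        frontier.append((neighbor, dist + 1))
            return seen

        def sonde_like_step(graph, providers, node, h):
            """Hypothetical local rule: host the function only if no provider is within h hops."""
            if not (within_h_hops(graph, node, h) & providers):
                providers.add(node)  # becoming a provider guarantees one within h hops of this node
            return providers

        # Toy line topology 0-1-2-3-4-5; with h = 1, every other node ends up providing.
        graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
        providers = set()
        for node in graph:
            providers = sonde_like_step(graph, providers, node, h=1)
        print(sorted(providers))  # [0, 2, 4]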

    What is a black box algorithm?: Tractatus of algorithmic decision-making

    Companies or institutions (Divinities) disintermediate their relations with users via decision-making algorithms (Pythias): users (mortals) are offered arbitrary decisions (oracles) during their interactions. How to apprehend these black box algorithms, from the point of view of a user or regulator?

    Evaluating topology quality through random walks

    A distributed system or network can be modeled as a graph representing the "who knows who" relationship. The conductance of a graph expresses the quality of its connectivity. In a network composed of large dense clusters connected through only a few links, the risk of partitioning is high; this is typically reflected by a low conductance of the graph. Computing the conductance of a graph is a complex and cumbersome task: it basically requires full knowledge of the graph and is prohibitively expensive computation-wise. Beyond the information carried by the conductance itself, what really matters is to identify the nodes that are critical from a topology point of view. In this paper we propose a fully decentralized algorithm that provides each node with a value reflecting the quality of its connectivity. Comparing these values across nodes yields a local approximation of a global characteristic of the graph. Our algorithm relies on an anonymous probe visiting the network in an unbiased random fashion. Each node records the time elapsed between visits of the probe (called the return time in the sequel). Computing the standard deviation of these return times gives every node information it can use to assess its relative position, and therefore whether it is critical, in a graph exhibiting low conductance. Based on this information, graph improvement algorithms may be triggered. The moments of order 1 and 2 of the return times are evaluated analytically using a Markov chain model, showing that the standard deviation of the return time is related to the position of a node in the graph. We evaluated our algorithm through simulations. The results show that it provides information correlated with the conductance of the graph; for example, we were able to precisely detect bridges in a network composed of two dense clusters connected through a single link.
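
    As a rough illustration of the probe idea, the sketch below runs a plain simple random walk (an assumption: the paper's unbiased walk and its Markov-chain analysis are not reproduced) on a graph made of two cliques joined by a single bridge, records the per-node return times, and prints each node's return-time standard deviation so that the bridge endpoints can be compared against the other clique members.

        import random
        from collections import defaultdict

        def two_cluster_graph(n):
            """Two cliques of n nodes each, joined by a single bridge edge (0 -- n)."""
            adj = defaultdict(list)
            for offset in (0, n):
                for i in range(n):
                    for j in range(i + 1, n):
                        adj[offset + i].append(offset + j)
                        adj[offset + j].append(offset + i)
            adj[0].append(n)
            adj[n].append(0)
            return adj

        def return_time_stddev(adj, steps=200_000, seed=1):
            """Walk the graph, record the times between successive visits to each node,
            and return the per-node standard deviation of those return times."""
            rng = random.Random(seed)
            last_visit, returns = {}, defaultdict(list)
            node = 0
            for t in range(steps):
                if node in last_visit:
                    returns[node].append(t - last_visit[node])
                last_visit[node] = t
                node = rng.choice(adj[node])
            stats = {}
            for v, times in returns.items():
                mean = sum(times) / len(times)
                stats[v] = (sum((x - mean) ** 2 for x in times) / len(times)) ** 0.5
            return stats

        stats = return_time_stddev(two_cluster_graph(10))
        for v in sorted(stats):  # nodes 0 and 10 are the bridge endpoints
            print(v, round(stats[v], 1))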

    On the relevance of APIs facing fairwashed audits

    Recent legislation requires AI platforms to provide APIs so that regulators can assess their compliance with the law. Research has nevertheless shown that platforms can manipulate their API answers through fairwashing. Facing this threat to reliable auditing, this paper studies the benefits of jointly using platform scraping and APIs. In this setup, we elaborate on the use of scraping to detect manipulated answers: since fairwashing only manipulates API answers, exploiting scraped data may reveal a manipulation. To abstract the wide range of specific API/scraping situations, we introduce a notion of proxy that captures the consistency an auditor might expect between both data sources. If the regulator has a good proxy of this consistency, then she can easily detect manipulation and even bypass the API to conduct her audit. On the other hand, without a good proxy, relying on the API is necessary and the auditor cannot defend against fairwashing. We then simulate practical scenarios in which the auditor mostly relies on the API to conveniently conduct the audit task while preserving her chances of detecting a potential manipulation. To highlight the tension between the audit task and the detection of API fairwashing, we identify Pareto-optimal strategies in a practical audit scenario. We believe this research sets the stage for reliable audits in practical and manipulation-prone setups.
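
    The following non-authoritative sketch illustrates the intuition of checking API answers against scraped observations through a proxy; the function names, the identity proxy, and the tolerance threshold are invented for illustration and do not correspond to the paper's formalism.

        def detect_fairwashing(api_answers, scraped_answers, proxy, tolerance=0.05):
            """Flag a possible manipulation when API answers disagree with what the proxy
            predicts from scraped observations more often than the tolerance allows."""
            disagreements = sum(
                1 for item, api_value in api_answers.items()
                if proxy(scraped_answers[item]) != api_value
            )
            return disagreements / len(api_answers) > tolerance

        # Toy example: the API claims item "b" was rejected although the scraped
        # public page shows it as accepted.
        api_answers     = {"a": "accepted", "b": "rejected", "c": "accepted"}
        scraped_answers = {"a": "accepted", "b": "accepted", "c": "accepted"}
        identity_proxy = lambda scraped_value: scraped_value  # perfect proxy: scrape should mirror the API
        print(detect_fairwashing(api_answers, scraped_answers, identity_proxy))  # True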

    Finding Good Partners in Availability-aware P2P Networks

    In this paper, we study the problem of finding peers matching a given availability pattern in a peer-to-peer (P2P) system. We first prove the existence of such patterns in a new trace of the eDonkey network, containing the sessions of 14M peers over 27 days. We also show that, using only 7 days of history, a simple predictor can select predictable peers and successfully predict their online periods for the next week. Then, motivated by practical examples, we specify two formal problems of availability matching that arise in real applications: disconnection matching, where peers look for partners expected to disconnect at the same time, and presence matching, where peers look for partners expected to be online simultaneously in the future. As a scalable and inexpensive solution, we propose to use epidemic protocols for topology management, such as T-Man, and we provide corresponding metrics for both matching problems. Finally, we evaluated this solution by simulating two P2P applications over our real trace: task scheduling and file storage. The simulations showed that our simple solution provided good partners fast enough to match the needs of both applications, which consequently performed just as efficiently at a much lower cost. We believe that this work will be useful for many P2P applications for which it has been shown that choosing good partners, based on their availability, drastically improves efficiency.
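
    As a non-authoritative illustration of what such matching metrics could look like (the paper's actual metrics for T-Man are not reproduced), the sketch below scores two peers' predicted availability over the next 8 hours for presence matching and disconnection matching.

        def presence_match(avail_a, avail_b):
            """Fraction of future time slots where both peers are predicted to be online."""
            both_online = sum(a and b for a, b in zip(avail_a, avail_b))
            return both_online / len(avail_a)

        def disconnection_match(avail_a, avail_b):
            """Score close to 1 when the peers' predicted disconnection slots are close."""
            last_a = max(i for i, up in enumerate(avail_a) if up)
            last_b = max(i for i, up in enumerate(avail_b) if up)
            return 1.0 - abs(last_a - last_b) / len(avail_a)

        # Hourly predictions for the next 8 hours (1 = online, 0 = offline).
        alice = [1, 1, 1, 1, 0, 0, 0, 0]
        bob   = [0, 1, 1, 1, 1, 0, 0, 0]
        print(presence_match(alice, bob))       # 0.375
        print(disconnection_match(alice, bob))  # 0.875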

    A Self-organising Isolated Anomaly Detection Architecture for Large Scale Systems

    Monitoring a system means collecting and analyzing relevant information provided by the monitored devices so as to remain continuously aware of the system state. However, the ever-growing complexity and scale of systems make both real-time monitoring and fault detection tedious tasks. The usual approach is therefore to focus solely on a subset of state information, providing only coarse-grained indicators. As a consequence, detecting isolated failures or anomalies is a challenging issue. In this work, we propose to address this issue by pushing the monitoring task to the edge of the network. We present a peer-to-peer architecture that enables nodes to adaptively and efficiently self-organize according to their "health" indicators. By exploiting both the temporal and spatial correlations that exist between a device and its vicinity, our approach guarantees that only isolated anomalies (an anomaly is isolated if it impacts a single monitored device) are reported on the fly to the network operator. We show that the end-to-end detection process, i.e., from local detection to reporting to the management operator, requires a number of messages logarithmic in the size of the network.
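
    The spatial-correlation part of this idea can be illustrated with a small, non-authoritative sketch (the health indicator, the threshold, and the reporting rule are invented, and the temporal correlation and logarithmic-cost reporting path are not modeled): a device reports only when it looks anomalous while its vicinity looks healthy.

        def is_isolated_anomaly(own_health, neighbor_healths, threshold=0.5):
            """Report only if this device is anomalous while all of its neighbors are healthy."""
            own_anomalous = own_health < threshold
            vicinity_healthy = all(h >= threshold for h in neighbor_healths)
            return own_anomalous and vicinity_healthy

        # Health indicators in [0, 1], where 1.0 means perfectly healthy.
        print(is_isolated_anomaly(0.2, [0.9, 0.8, 0.95]))  # True: only this device is degraded
        print(is_isolated_anomaly(0.2, [0.3, 0.1, 0.95]))  # False: the vicinity is degraded too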